Learning Latent Variable Gaussian Graphical Model for Biomolecular Network with Low Sample Complexity
Authors
Abstract
Learning a Gaussian graphical model with latent variables is ill-posed when the sample complexity is insufficient, and must therefore be appropriately regularized. A common choice is the convex ℓ1-plus-nuclear-norm penalty to regularize the search. However, these additive convex regularizations do not always yield the best estimator, especially when the sample complexity is low. In this paper, we consider a concave additive regularization that does not require the strong irrepresentable condition. We also use concave regularization to correct the intrinsic estimation biases of the Lasso and the nuclear penalty. We derive the proximity operators for our concave regularizers, which induce sparsity and low-rankness, respectively. In addition, we extend our method to allow the decomposition of fused structured sparsity plus low-rankness, providing a powerful tool for models with temporal information. Specifically, we develop a nontrivial modified alternating direction method of multipliers (ADMM) with at least local convergence. Finally, we validate the effectiveness of our method on both synthetic and real data. In the application of reconstructing two-stage cancer networks, "the Warburg effect" can be revealed directly.
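For orientation, the convex baseline that the abstract's concave regularizers refine relies on two standard proximity operators: elementwise soft-thresholding for the ℓ1 penalty and singular-value thresholding for the nuclear norm. The sketch below shows these standard convex operators only; the paper's concave variants are not reproduced here.

```python
import numpy as np

def prox_l1(X, t):
    """Soft-thresholding: proximal operator of t * ||X||_1, applied elementwise."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_nuclear(X, t):
    """Singular-value thresholding: proximal operator of t * ||X||_*.
    Shrinks each singular value toward zero by t, inducing low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```

Inside an ADMM loop, these operators are applied once per iteration to the sparse and low-rank blocks of the decomposition, respectively.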
Similar Resources
Low Complexity Gaussian Latent Factor Models and a Blessing of Dimensionality
Learning the structure of graphical models from data usually incurs a heavy curse of dimensionality that renders this problem intractable in many real-world situations. The rare cases where the curse becomes a blessing provide insight into the limits of the efficiently computable and augment the scarce options for treating very under-sampled, high-dimensional data. We study a special class of G...
Scalable Latent Tree Model and its Application to Health Analytics
We present an integrated approach to structure and parameter estimation in latent tree graphical models, where some nodes are hidden. Our approach follows a “divide-and-conquer” strategy, and learns models over small groups of variables (where the grouping is obtained through preprocessing). A global solution is obtained in the end through simple merge steps. Our structure learning procedure in...
Sample Complexity Analysis for Learning Overcomplete Latent Variable Models through Tensor Methods
We provide guarantees for learning latent variable models emphasizing on the overcomplete regime, where the dimensionality of the latent space can exceed the observed dimensionality. In particular, we consider multiview mixtures, spherical Gaussian mixtures, ICA, and sparse coding models. We provide tight concentration bounds for empirical moments through novel covering arguments. We analyze pa...
Learning Latent Variable Gaussian Graphical Models
Gaussian graphical models (GGM) have been widely used in many high-dimensional applications ranging from biological and financial data to recommender systems. Sparsity in GGM plays a central role both statistically and computationally. Unfortunately, real-world data often does not fit well to sparse graphical models. In this paper, we focus on a family of latent variable Gaussian graphical model...
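The latent-variable GGM family referred to above rests on a Schur-complement identity: marginalizing latent variables out of a sparse joint precision matrix leaves a "sparse minus low-rank" precision over the observed variables. The toy numbers below are illustrative, not taken from any of the papers listed here.

```python
import numpy as np

# Joint precision over (observed O, latent H); diagonal dominance keeps it PD.
p, h = 5, 1                                   # observed / latent dimensions
K_OO = 2.0 * np.eye(p)                        # sparse observed block
K_OH = np.array([[0.3], [0.2], [0.1], [0.2], [0.3]])  # observed-latent couplings
K_HH = 2.0 * np.eye(h)
K = np.block([[K_OO, K_OH], [K_OH.T, K_HH]])

# Schur complement: marginal precision of O is K_OO minus a rank-<=h term.
L = K_OH @ np.linalg.inv(K_HH) @ K_OH.T       # low-rank term
K_marg = K_OO - L                             # sparse-minus-low-rank structure
```

This is why the estimation problem is posed as recovering a sparse matrix plus a low-rank correction rather than a single sparse precision matrix.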
Spectral Methods for Learning Multivariate Latent Tree Structure
This work considers the problem of learning the structure of multivariate linear tree models, which include a variety of directed tree graphical models with continuous, discrete, and mixed latent variables, such as linear-Gaussian models, hidden Markov models, Gaussian mixture models, and Markov evolutionary trees. The setting is one where we only have samples from certain observe...
Journal:
Volume 2016, Issue -
Pages: -
Publication date: 2016